Some Notes on Persistent Volumes in Kubernetes

At dusk you sit under the eaves and watch the sky slowly darken, lonely and desolate at heart, feeling your life has been taken from you. I was young then, but I feared living on like that, growing old like that. To me it seemed more terrible than death. — Wang Xiaobo

Preface


  • These notes were originally kept together with my notes on Volumes
  • That made them inconvenient to look up, so they are now split out on their own
  • This post covers:
    • A summary of PV + PVC + SC (StorageClass)
  • Where my understanding falls short, corrections are welcome



Persistent Volumes (PV)

In the Kubernetes world, the storage-related resource objects are Secret, ConfigMap, Volume, and the subject of these notes, Persistent Volume. Secrets hold sensitive data, and ConfigMaps hold configuration files.

A Volume is defined on a Pod and is part of the Pod's "compute resources", while "network storage" is in fact an entity that exists independently of compute resources. With virtual machines, for example, we usually define network storage first, carve a "disk" out of it, and then attach that disk to the VM.

Persistent Volume (PV) and its companion Persistent Volume Claim (PVC) play a similar role. A PV can be understood as a chunk of storage carved from some network storage available to the Kubernetes cluster. It resembles a Volume, but differs in the following ways.

You can also think of it in terms of physical and logical volumes: a PV is like a physical volume, and a PVC is like a logical volume carved out of it.

Differences between Persistent Volume and Volume

  • A PV can only be network storage; it does not belong to any Node, but it can be accessed from every Node.
  • A PV is not defined on a Pod; it is defined independently of any Pod.
  • The PV types currently supported include: gcePersistentDisk, AWSElasticBlockStore, AzureFile, AzureDisk, FC (Fibre Channel), Flocker, NFS, iSCSI, RBD (Rados Block Device), CephFS, Cinder, GlusterFS, VsphereVolume, Quobyte Volumes, VMware Photon, PortworxVolumes, ScaleIO Volumes, and HostPath (single-node testing only).

Creating a PV

Here we create an NFS-backed PV; the YAML file is below.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  #storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /tmp
    server: vms81.liruilongs.github.io

Note the PV's accessModes attribute. The following types are currently available:

  • ReadWriteOnce: read-write, mountable by a single Node only.
  • ReadOnlyMany: read-only, mountable by multiple Nodes.
  • ReadWriteMany: read-write, mountable by multiple Nodes.
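In kubectl output these modes show up abbreviated as RWO, ROX, and RWX. If you want the authoritative field documentation, kubectl explain prints it straight from the API:

kubectl explain persistentvolume.spec.accessModes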
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pv
No resources found
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$vim pod_volunms-pv.yaml

We need an NFS server for this; for convenience, we deploy the server side on the current node.

Deploying an NFS server

To store data on a shared directory exported over NFS, we first need to deploy an NFS server on the system.

┌──[root@vms81.liruilongs.github.io]-[~]
└─$yum -y install nfs-utils.x86_64
┌──[root@vms81.liruilongs.github.io]-[~]
└─$systemctl enable nfs-server.service --now
┌──[root@vms81.liruilongs.github.io]-[~]
└─$mkdir -p /liruilong
┌──[root@vms81.liruilongs.github.io]-[/liruilong]
└─$cd /liruilong/;echo `date` > liruilong.txt
┌──[root@vms81.liruilongs.github.io]-[/liruilong]
└─$cd /liruilong/;cat liruilong.txt
2021年 11月 27日 星期六 21:57:10 CST
┌──[root@vms81.liruilongs.github.io]-[/liruilong]
└─$cat /etc/exports
┌──[root@vms81.liruilongs.github.io]-[/liruilong]
└─$echo "/liruilong *(rw,sync,no_root_squash)" > /etc/exports
┌──[root@vms81.liruilongs.github.io]-[/liruilong]
└─$exportfs -arv
exporting *:/liruilong
┌──[root@vms81.liruilongs.github.io]-[/liruilong]
└─$showmount -e
Export list for vms81.liruilongs.github.io:
/liruilong *
┌──[root@vms81.liruilongs.github.io]-[/liruilong]
└─$

Next we need to install nfs-utils on all worker nodes and then mount the export.

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a "yum -y install nfs-utils"
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a "systemctl enable nfs-server.service --now"

Testing the NFS share

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a "showmount -e vms81.liruilongs.github.io"
192.168.26.83 | CHANGED | rc=0 >>
Export list for vms81.liruilongs.github.io:
/liruilong *
192.168.26.82 | CHANGED | rc=0 >>
Export list for vms81.liruilongs.github.io:
/liruilong *
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

Mount test

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a "mount vms81.liruilongs.github.io:/liruilong /mnt"

192.168.26.82 | CHANGED | rc=0 >>

192.168.26.83 | CHANGED | rc=0 >>

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a "cd /mnt/;ls"
192.168.26.83 | CHANGED | rc=0 >>
liruilong.txt
192.168.26.82 | CHANGED | rc=0 >>
liruilong.txt
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a "df -h | grep liruilong"
192.168.26.82 | CHANGED | rc=0 >>
vms81.liruilongs.github.io:/liruilong 150G 8.3G 142G 6% /mnt
192.168.26.83 | CHANGED | rc=0 >>
vms81.liruilongs.github.io:/liruilong 150G 8.3G 142G 6% /mnt

Unmounting

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a "umount /mnt"

Next we add a directory for testing.

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$cat /etc/exports
/liruilong *(rw,sync,no_root_squash)
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$echo "/tmp *(rw,sync,no_root_squash)" >>/etc/exports
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$cat /etc/exports
/liruilong *(rw,sync,no_root_squash)
/tmp *(rw,sync,no_root_squash)
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$exportfs -avr
exporting *:/tmp
exporting *:/liruilong

Create the PV from the YAML file.

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl apply -f pod_volunms-pv.yaml
persistentvolume/pv0003 created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pv -o wide
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE VOLUMEMODE
pv0003 5Gi RWO Recycle Available 16s Filesystem

A PV is a stateful object, with the following possible states.

States:
  • Available: free, not yet bound to a claim.
  • Bound: bound to a PVC.
  • Released: the bound PVC has been deleted, but the resource has not yet been reclaimed by the cluster.
  • Failed: automatic reclamation of the PV failed.
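To watch these transitions live, one option is to keep an eye on the PV while creating and deleting the PVC in another terminal (a quick sketch; pv0003 is the PV created above):

kubectl get pv pv0003 --watch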

Creating a PVC

If a Pod wants to claim a certain kind of PV, it first needs to define a PersistentVolumeClaim (PVC) object:

PVCs are namespaced: PVCs in different namespaces are isolated from one another. A PVC is matched to a PV through the accessModes and storage constraints, with no need to declare the pairing explicitly: the accessModes must be the same, and the requested storage must be less than or equal to the PV's capacity.
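A quick sketch of the storage constraint (a hypothetical PVC; assume only the 5Gi pv0003 above exists): a claim requesting more than any available PV's capacity simply stays Pending, since no PV satisfies it.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc-toobig           # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce            # same mode as pv0003
  resources:
    requests:
      storage: 10Gi            # exceeds pv0003's 5Gi, so this PVC stays Pending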

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pvc
No resources found in liruilong-volume-create namespace.
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$vim pod_volumes-pvc.yaml

Define the resource file

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc01
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 4Gi
  #storageClassName: slow

Create the PVC

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl apply -f pod_volumes-pvc.yaml
persistentvolumeclaim/mypvc01 created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pvc -o wide
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE
mypvc01 Bound pv0003 5Gi RWO 10s Filesystem
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$

With the PVC and PV created, let's see how they get bound; this is where storageClassName comes in.

storageClassName

storageClassName controls which PVC can bind to which PV: storage and accessModes are only matched when the storageClassName values are the same.

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$vim pod_volunms-pv.yaml

pod_volunms-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /tmp
    server: vms81.liruilongs.github.io
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl apply -f pod_volunms-pv.yaml
persistentvolume/pv0003 created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pv -A
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv0003 5Gi RWO Recycle Available slow 8s

pod_volumes-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc01
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 4Gi
  storageClassName: slow
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pvc -A
No resources found
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl apply -f pod_volumes-pvc.yaml
persistentvolumeclaim/mypvc01 created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pvc -A
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
liruilong-volume-create mypvc01 Bound pv0003 5Gi RWO slow 5s

Using persistent storage

Using the PVC inside a Pod

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: podvolumepvc
  name: podvolumepvc
spec:
  volumes:
  - name: volumes1
    persistentVolumeClaim:
      claimName: mypvc01
  containers:
  - image: nginx
    name: podvolumehostpath
    resources: {}
    volumeMounts:
    - mountPath: /liruilong
      name: volumes1
    imagePullPolicy: IfNotPresent
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl apply -f pod_volumespvc.yaml
pod/podvolumepvc created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
podvolumepvc 1/1 Running 0 15s 10.244.171.184 vms82.liruilongs.github.io <none> <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl exec -it podvolumepvc -- sh
# ls
bin dev docker-entrypoint.sh home lib64 media opt root sbin sys usr
boot docker-entrypoint.d etc lib liruilong mnt proc run srv tmp var
# cd liruilong
# ls
runc-process838092734
systemd-private-66344110bb03430193d445f816f4f4c4-chronyd.service-SzL7id
systemd-private-6cf1f72056ed4482a65bf89ec2a130a9-chronyd.service-5m7c2i
systemd-private-b1dc4ffda1d74bb3bec5ab11e5832635-chronyd.service-cPC3Bv
systemd-private-bb19f3d6802e46ab8dcb5b88a38b41b8-chronyd.service-cjnt04
#

PV reclaim policy

persistentVolumeReclaimPolicy: Recycle

  • Recycle (deletes the data): a recycler pod is spawned to scrub the data; once the PVC is deleted the PV can be reused, and its status goes from Released back to Available.
  • Retain (keeps the data): after the PVC is deleted the PV remains unusable, staying in the Released state indefinitely.

Here is the recycler pod being spawned to scrub the data:

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv0003 5Gi RWO Recycle Bound liruilong-volume-create/mypvc01 slow 131m
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl describe pv pv0003
..................
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal RecyclerPod 53s persistentvolume-controller Recycler pod: Successfully assigned default/recycler-for-pv0003 to vms82.liruilongs.github.io
Normal RecyclerPod 51s persistentvolume-controller Recycler pod: Pulling image "busybox:1.27"
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv0003 5Gi RWO Recycle Available slow 136m
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$
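The reclaim policy of an existing PV can also be changed in place. A sketch using kubectl patch, switching pv0003 to Retain:

kubectl patch pv pv0003 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'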

Dynamic volume provisioning: storageClass

A storageClass takes care of creating PVs dynamically: the administrator only has to create the storageClass, and the PV is created automatically when a user creates a PVC. When a PVC is created, the system notifies the storageClass; the storageClass learns the backend storage type from its associated provisioner and dynamically creates a PV to bind to that PVC.

How a storageClass works

A storageClass definition must include a provisioner; different provisioners determine what backend storage is used when a PV is created dynamically.

A provisioner using AWS EBS as the PV backend:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/aws-ebs
parameters:
  type: io1
  iopsPerGB: "10"
  fsType: ext4

A provisioner using LVM as the PV backend:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-lvm
provisioner: lvmplugin.csi.alibabacloud.com
parameters:
  vgName: volumegroup1
  fsType: ext4
reclaimPolicy: Delete

Using hostPath as the PV backend:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-hostpath-sc
provisioner: hostpath.csi.k8s.io
reclaimPolicy: Delete
#volumeBindingMode: Immediate
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true

Of the provisioners used in the three examples above, some are built into Kubernetes, such as kubernetes.io/aws-ebs; the other two are not. The provisioners that ship with Kubernetes are:

  • kubernetes.io/aws-ebs
  • kubernetes.io/gce-pd
  • kubernetes.io/glusterfs
  • kubernetes.io/cinder
  • kubernetes.io/vsphere-volume
  • kubernetes.io/rbd
  • kubernetes.io/quobyte
  • kubernetes.io/azure-disk
  • kubernetes.io/azure-file
  • kubernetes.io/portworx-volume
  • kubernetes.io/scaleio
  • kubernetes.io/storageos
  • kubernetes.io/no-provisioner

When creating PVs dynamically, pick a provisioner that matches the backend storage in use. Provisioners like lvmplugin.csi.alibabacloud.com and hostpath.csi.k8s.io are not shipped with Kubernetes; they are called external provisioners. External provisioners are supplied by third parties and are implemented as custom CSI drivers (Container Storage Interface drivers).

So the overall flow is: the administrator creates the storageClass and names a provisioner in its provisioner field; once the storageClass exists, users point their PVCs at it via .spec.storageClassName.
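A minimal sketch of that pairing (the names below are made up for illustration); the PVC's .spec.storageClassName must equal the storageClass's metadata.name:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-sc                     # created by the administrator
provisioner: example.com/provisioner   # hypothetical external provisioner
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  storageClassName: example-sc         # the user references the SC by name
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi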

Dynamic volume provisioning with NFS

Create a directory /vdisk and export it.

┌──[root@vms81.liruilongs.github.io]-[~]
└─$cat /etc/exports
/liruilong *(rw,sync,no_root_squash)
/tmp *(rw,sync,no_root_squash)
┌──[root@vms81.liruilongs.github.io]-[~]
└─$echo "/vdisk *(rw,sync,no_root_squash)" >>/etc/exports
┌──[root@vms81.liruilongs.github.io]-[~]
└─$exportfs -avr
exporting *:/vdisk
exportfs: Failed to stat /vdisk: No such file or directory
exporting *:/tmp
exporting *:/liruilong
┌──[root@vms81.liruilongs.github.io]-[/]
└─$mkdir vdisk

Because Kubernetes has no built-in provisioner for NFS, we need to download a plugin to set up an NFS external provisioner.

Plugin download address: https://github.com/kubernetes-incubator/external-storage.git

rbac.yaml deploys the RBAC permissions; swap in your own namespace.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: liruilong-volume-create
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: liruilong-volume-create
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: liruilong-volume-create
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: liruilong-volume-create
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: liruilong-volume-create
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

Because the NFS provisioner is not built in, it has to be created first.

Configuration note: Kubernetes v1.20 and later need the kube-apiserver flag - --feature-gates=RemoveSelfLink=false (this workaround only applies up to v1.23; from v1.24 the gate can no longer be disabled, and newer setups use the nfs-subdir-external-provisioner instead).

┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes/manifests]
└─$pwd
/etc/kubernetes/manifests
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes/manifests]
└─$head -n 20 kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.26.81:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.26.81
    - --feature-gates=RemoveSelfLink=false
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes/manifests]
└─$

deployment.yaml

  1. Since we are working in the liruilong-volume-create namespace, change every namespace value to liruilong-volume-create.
  2. The image must be pulled on all nodes in advance, and the image pull policy adjusted accordingly (see the sketch after this list).
  3. In the env section, PROVISIONER_NAME names the provisioner, here fuseim.pri/ifs; NFS_SERVER and NFS_PATH specify the storage this provisioner uses.
  4. server and path under volumes specify the NFS server and the shared directory.
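Since the nodes are already managed with ansible in this setup, pre-pulling the image could look like this (a sketch, assuming docker is the container runtime on the nodes):

ansible node -m shell -a "docker pull quay.io/external_storage/nfs-client-provisioner:latest"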
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: liruilong-volume-create
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.26.81
            - name: NFS_PATH
              value: /vdisk
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.26.81
            path: /vdisk

Deploy the NFS provisioner and check that the pod is running:

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create/nfsdy]
└─$kubectl apply -f deployment.yaml
deployment.apps/nfs-client-provisioner created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create/nfsdy]
└─$kubectl get pods
NAME READY STATUS RESTARTS AGE
nfs-client-provisioner-65b5569d76-cz6hh 1/1 Running 0 73s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create/nfsdy]
└─$

With the NFS provisioner created, the next step is a storageClass that uses it.

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create/nfsdy]
└─$kubectl get sc
No resources found
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create/nfsdy]
└─$kubectl apply -f class.yaml
storageclass.storage.k8s.io/managed-nfs-storage created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create/nfsdy]
└─$kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
managed-nfs-storage fuseim.pri/ifs Delete Immediate false 3s

class.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"

Here the provisioner value fuseim.pri/ifs is the provisioner name specified in deployment.yaml. This YAML creates a storageClass named managed-nfs-storage that uses the provisioner named fuseim.pri/ifs.

Now we create the PVC.

pvc_nfs.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-nfs
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Mi
  storageClassName: "managed-nfs-storage"
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl apply -f ./pvc_nfs.yaml
persistentvolumeclaim/pvc-nfs created

Check what was created:

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pods
NAME READY STATUS RESTARTS AGE
nfs-client-provisioner-65b5569d76-7k6gm 1/1 Running 0 35s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
managed-nfs-storage fuseim.pri/ifs Delete Immediate false 30s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-nfs Bound pvc-b12e988a-8b55-4d48-87cf-998500df16f8 20Mi RWX managed-nfs-storage 28s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create/nfsdy]
└─$kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-b12e988a-8b55-4d48-87cf-998500df16f8 20Mi RWX Delete Bound liruilong-volume-create/pvc-nfs managed-nfs-storage 126m
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create/nfsdy]
└─$

Using the claimed PVC

pod_storageclass.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: podvolumepvc
  name: podvolumepvc
spec:
  volumes:
  - name: volumes1
    persistentVolumeClaim:
      claimName: pvc-nfs
  containers:
  - image: nginx
    name: podvolumehostpath
    resources: {}
    volumeMounts:
    - mountPath: /liruilong
      name: volumes1
    imagePullPolicy: IfNotPresent
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl apply -f pod_storageclass.yaml
pod/podvolumepvc created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pods
NAME READY STATUS RESTARTS AGE
nfs-client-provisioner-65b5569d76-7k6gm 1/1 Running 0 140m
podvolumepvc 1/1 Running 0 7s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl describe pods podvolumepvc | grep -A 4 Volumes:
Volumes:
  volumes1:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvc-nfs
    ReadOnly:   false

Creating locally provisioned volumes

Here local storage is used as the backend. Normally a PV can only be network storage that belongs to no Node, which is why NFS-style setups are more common. The SC names its provisioner in the provisioner field; once the storageClass is created, users defining PVCs get storage allocated via the default SC.

Provisioner and SC sources: https://github.com/rancher/local-path-provisioner

The YAML file would not download directly, so I opened it in a browser, copied it out, and applied it. My cluster had no SC to begin with; if yours already has one, just set it as the default SC.

┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.22/deploy/local-path-storage.yaml
The connection to the server raw.githubusercontent.com was refused - did you specify the right host or port?
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$wget https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.22/deploy/local-path-storage.yaml
--2022-10-15 21:45:02-- https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.22/deploy/local-path-storage.yaml
正在解析主机 raw.githubusercontent.com (raw.githubusercontent.com)... 0.0.0.0, ::
正在连接 raw.githubusercontent.com (raw.githubusercontent.com)|0.0.0.0|:443... 失败:拒绝连接。
正在连接 raw.githubusercontent.com (raw.githubusercontent.com)|::|:443... 失败:拒绝连接。
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$vim local-path-storage.yaml
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$ [新] 128L, 2932C 已写入
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get sc -A
No resources found
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$mkdir -p /opt/local-path-provisioner
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl apply -f local-path-storage.yaml
namespace/local-path-storage created
serviceaccount/local-path-provisioner-service-account created
clusterrole.rbac.authorization.k8s.io/local-path-provisioner-role created
clusterrolebinding.rbac.authorization.k8s.io/local-path-provisioner-bind created
deployment.apps/local-path-provisioner created
storageclass.storage.k8s.io/local-path created
configmap/local-path-config created
┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$

Confirm that it was created:

┌──[root@vms81.liruilongs.github.io]-[~/awx/awx-operator]
└─$kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
local-path rancher.io/local-path Delete WaitForFirstConsumer false 2m6s

Set it as the default SC: https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/change-default-storage-class/
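Following that doc page, marking local-path as the default takes one kubectl patch; PVCs that omit storageClassName will then be served by it:

kubectl patch storageclass local-path -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
kubectl get sc    # the default class is listed as local-path (default)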
